Quasi-convex optimization

Authors

Abstract


Similar references

A Quasi-Newton Approach to Nonsmooth Convex Optimization

We extend the well-known BFGS quasi-Newton method and its limited-memory variant (LBFGS) to the optimization of nonsmooth convex objectives. This is done in a rigorous fashion by generalizing three components of BFGS to subdifferentials: the local quadratic model, the identification of a descent direction, and the Wolfe line search conditions. We apply the resulting subLBFGS algorithm to L2-reg...
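As a rough illustration of the first two generalized components (generic BFGS/subgradient notation; the symbols B_k and the exact conditions below are standard textbook forms, not necessarily the paper's formulation): given a subgradient g \in \partial f(x_k), the local quadratic model and the trial direction are

    m_k(d) = f(x_k) + g^\top d + \tfrac{1}{2} d^\top B_k d, \qquad d_k = \arg\min_d m_k(d) = -B_k^{-1} g,

and d_k qualifies as a descent direction only if \sup_{g' \in \partial f(x_k)} g'^\top d_k < 0, which is the condition a generalized Wolfe line search can then work with.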


Proximal Quasi-Newton Methods for Convex Optimization

In [19], a general, inexact, efficient proximal quasi-Newton algorithm for composite optimization problems has been proposed and a sublinear global convergence rate has been established. In this paper, we analyze the convergence properties of this method, both in the exact and inexact setting, in the case when the objective function is strongly convex. We also investigate a practical variant of t...
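A minimal runnable sketch of the proximal quasi-Newton idea for composite objectives F(x) = f(x) + g(x), under the simplifying assumptions (made here for illustration, not taken from the paper) that the metric is B_k = (1/t) * I and g is the L1 norm, so the subproblem has a closed form; the paper's method uses genuine quasi-Newton metrics and inexact subproblem solves:

import numpy as np

def soft_threshold(v, tau):
    # Closed-form proximal operator of tau * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def prox_qn_step(x, grad_f, t, lam):
    # One step on F(x) = f(x) + lam * ||x||_1.  With the metric
    # simplified to B_k = (1/t) * I, the quasi-Newton subproblem
    #   min_y  grad_f(x) @ (y - x) + (1/(2*t)) * ||y - x||**2 + lam * ||y||_1
    # reduces to soft-thresholding a plain gradient step.
    return soft_threshold(x - t * grad_f(x), t * lam)

# Usage: minimize 0.5 * ||x - b||^2 + ||x||_1; the minimizer is
# soft_threshold(b, 1.0) = [2.0, 0.0, 0.0].
b = np.array([3.0, -0.5, 0.0])
x = np.zeros_like(b)
for _ in range(100):
    x = prox_qn_step(x, lambda z: z - b, t=0.5, lam=1.0)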


Beyond Convexity: Stochastic Quasi-Convex Optimization

Stochastic convex optimization is a basic and well-studied primitive in machine learning. It is well known that convex and Lipschitz functions can be minimized efficiently using Stochastic Gradient Descent (SGD). The Normalized Gradient Descent (NGD) algorithm is an adaptation of Gradient Descent which updates according to the direction of the gradients, rather than the gradients themselves. ...
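A minimal sketch of the NGD update described above (the constant step size and the stopping rule are illustrative choices, not the paper's analysis):

import numpy as np

def ngd(x0, grad, lr=0.1, steps=1000):
    # Normalized Gradient Descent: step along the direction g / ||g||
    # of the gradient rather than g itself, so very flat or very steep
    # regions of a quasi-convex objective do not dictate the step length.
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        g = np.asarray(grad(x), dtype=float)
        n = np.linalg.norm(g)
        if n == 0.0:              # stationary point reached
            break
        x = x - lr * g / n        # normalized update
    return x

# Usage: minimize f(x) = ||x||^2 via its gradient 2x; the iterate ends
# within one step length of the minimizer at the origin.
print(ngd(np.array([5.0, -3.0]), lambda x: 2.0 * x))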


Proximal Quasi-Newton Methods for Nondifferentiable Convex Optimization

A superlinearly convergent algorithm for minimizing the Moreau-Yosida regularization F has been proposed in [23]. However, this algorithm makes use of the generalized Jacobian of F, instead of matrices B_k generated by a quasi-Newton formula. Moreover, the line search is performed on the function F, rath...
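For context, the Moreau-Yosida regularization mentioned above is the standard construction (stated here for the reader, not quoted from the truncated abstract): for a convex function f and a parameter \lambda > 0,

    F_\lambda(x) = \min_y \left\{ f(y) + \tfrac{1}{2\lambda} \| y - x \|^2 \right\},

which is continuously differentiable with \nabla F_\lambda(x) = (x - p(x)) / \lambda, where p(x) is the unique minimizer above (the proximal point of x), and F_\lambda has the same minimizers as f.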




Journal

Journal title: Journal of Mathematical Analysis and Applications

Year: 1986

ISSN: 0022-247X

DOI: 10.1016/s0022-247x(86)80008-7